# Flash Linear Attention

**Rwkv7 2.9B World GGUF** · Mungert · Apache-2.0 · 748 downloads · 3 likes
RWKV-7 architecture with 2.9 billion parameters, supporting multilingual text generation.
Tags: Large Language Model, Supports Multiple Languages

**RWKV7 Goose Pile 168M HF** · RWKV · Apache-2.0 · 57 downloads · 2 likes
RWKV-7 model in Flash Linear Attention format, trained on the Pile dataset, supporting English text generation.
Tags: Large Language Model, Transformers, English

**RWKV7 Goose World3 1.5B HF** · RWKV · Apache-2.0 · 70 downloads · 2 likes
RWKV-7 model in Flash Linear Attention format, supporting English text generation.
Tags: Large Language Model, English

**RWKV7 Goose World3 2.9B HF** · RWKV · Apache-2.0 · 132 downloads · 7 likes
RWKV-7 model with 2.9 billion parameters in Flash Linear Attention format, supporting multilingual text generation.
Tags: Large Language Model, Supports Multiple Languages

**Rwkv7 1.5B World** · fla-hub · Apache-2.0 · 632 downloads · 9 likes
RWKV-7 model built on the Flash Linear Attention architecture, supporting multilingual text generation.
Tags: Large Language Model, Transformers, Supports Multiple Languages
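
The checkpoints above are RWKV-7 models published in the Flash Linear Attention (fla) format, and the entries tagged "Transformers" are intended to load through the Hugging Face `transformers` API. Below is a minimal sketch of loading one of them for text generation; the repo id `fla-hub/rwkv7-1.5B-world` and the use of `trust_remote_code=True` are assumptions about how the "Rwkv7 1.5B World" entry is published, so check the actual model card before running.

```python
# Minimal sketch: generate text with an RWKV-7 / Flash Linear Attention checkpoint
# via the Hugging Face transformers API. The repo id below is an assumption for
# the "Rwkv7 1.5B World" listing; the flash-linear-attention package may also be
# required, per the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fla-hub/rwkv7-1.5B-world"  # assumed repo id, verify on the model page

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

prompt = "The RWKV-7 architecture replaces softmax attention with"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding of a short continuation; adjust max_new_tokens as needed.
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```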